
    Robot manipulator skill learning and generalising through teleoperation

    Robot manipulators have been widely used for simple, repetitive and accurate tasks in industrial plants, such as pick-and-place, assembly and welding, but they are still difficult to deploy in human-centred environments for dexterous manipulation tasks, such as medical examination and robot-assisted healthcare. These tasks involve not only motion planning and control but also the compliant interaction behaviour of robots, e.g. simultaneous motion control, force regulation and impedance adaptation under dynamic and unknown environments. Recently, with the development of collaborative robots (cobots) and machine learning, robot skill learning and generalisation have attracted increasing attention from the robotics, machine learning and neuroscience communities. Nevertheless, learning complex and compliant manipulation skills, such as manipulating deformable objects, scanning the human body and folding clothes, is still challenging for robots. On the other hand, teleoperation, also known as remote operation or telerobotics, has been studied since the 1950s, with applications such as space exploration, telemedicine, marine vehicles and emergency response. One of its advantages is that it combines the precise control of robots with human intelligence to perform dexterous and safety-critical tasks from a distance. In addition, telepresence allows remote operators to perceive the actual interaction between the robot and the environment, including vision, sound and haptic feedback. Especially with the development of augmented reality (AR), virtual reality (VR) and wearable devices, intuitive and immersive teleoperation has received increasing attention from the robotics and computer science communities. Thus, various human-robot collaboration (HRC) interfaces based on these technologies have been developed to integrate robot control and telemanipulation so that robots can learn skills from human operators. In this context, robot skill learning can benefit teleoperation by automating repetitive and tedious tasks, while teleoperated demonstration and interaction by human teachers allow the robot to learn progressively and interactively. Therefore, in this dissertation, we study human-robot skill transfer and generalisation through intuitive teleoperation interfaces for contact-rich manipulation tasks, including medical examination, manipulating deformable objects, grasping soft objects and composite layup in manufacturing. The introduction, motivation and objectives of this thesis are presented in Chapter 1. In Chapter 2, a literature review on manipulation skill acquisition through teleoperation is carried out, and the motivation and objectives of this thesis are discussed subsequently. Overall, the main content of this thesis has three parts: Part 1 (Chapter 3) introduces the development and controller design of teleoperation systems with multimodal feedback, which is the foundation of this project for robot learning from human demonstration and interaction. In Part 2 (Chapters 4, 5, 6 and 7), we study a primitive skill library, a behaviour tree-based modular method, and a perception-enhanced method to improve the generalisation capability of learning from human demonstrations, and several applications are used to evaluate the effectiveness of these methods. In Part 3 (Chapter 8), we study deep multimodal neural networks to encode manipulation skills, especially multimodal perception information; this part includes physical experiments on robot-assisted ultrasound scanning. Chapter 9 summarises the contributions and potential directions of this thesis. Keywords: Learning from demonstration; Teleoperation; Multimodal interface; Human-in-the-loop; Compliant control; Human-robot interaction; Robot-assisted sonography

    Composite dynamic movement primitives based on neural networks for human–robot skill transfer

    In this paper, composite dynamic movement primitives (DMPs) based on radial basis function neural networks (RBFNNs) are investigated for robot skill learning from human demonstrations. The composite DMPs can encode position and orientation manipulation skills simultaneously for human-to-robot skill transfer. As the robot manipulator is expected to perform tasks in unstructured and uncertain environments, it must be able to adapt its behaviour to new situations and environments. Since DMPs can adapt to uncertainties and perturbations and support spatial and temporal scaling, they have been successfully employed for various tasks, such as trajectory planning and obstacle avoidance. However, existing skill models mainly encode position or orientation separately, whereas in practice position and orientation are commonly constrained simultaneously. Besides, DMP-based skill models still struggle to generalise to dynamic tasks, e.g., reaching a moving target and obstacle avoidance. In this paper, we propose a composite DMP-based framework representing position and orientation simultaneously for robot skill acquisition, and a neural network technique is used to train the skill model. The effectiveness of the proposed approach is validated in simulation and experiments.
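
    As a rough illustration of the DMP formulation underlying this work, the sketch below implements a single-DoF discrete movement primitive with a Gaussian radial-basis forcing term, fitted to one demonstration and replayed towards a new goal. The gains, basis count and class/function names are illustrative assumptions, not values from the paper.

```python
import numpy as np

class DMP1D:
    """Minimal single-DoF discrete dynamic movement primitive (illustrative)."""

    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_x=3.0):
        self.alpha, self.beta, self.alpha_x = alpha, beta, alpha_x
        # Gaussian basis centres placed along the decaying phase variable x
        self.c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
        widths = np.diff(self.c)
        self.h = 1.0 / np.concatenate([widths, widths[-1:]]) ** 2
        self.w = np.zeros(n_basis)

    def _forcing(self, x):
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return x * np.dot(psi, self.w) / (psi.sum() + 1e-10)

    def fit(self, y_demo):
        """Fit forcing-term weights to one demonstration (duration normalised to 1 s)."""
        y_demo = np.asarray(y_demo, dtype=float)
        dt = 1.0 / (len(y_demo) - 1)
        y0, g = y_demo[0], y_demo[-1]
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.linspace(0.0, 1.0, len(y_demo)))
        # Forcing term that would reproduce the demonstrated accelerations
        f_target = ydd - self.alpha * (self.beta * (g - y_demo) - yd)
        for i in range(len(self.w)):
            psi = np.exp(-self.h[i] * (x - self.c[i]) ** 2)
            s = x * (g - y0)
            self.w[i] = np.dot(s * psi, f_target) / (np.dot(s * psi, s) + 1e-10)

    def rollout(self, y0, g, n_steps=200):
        """Integrate the DMP towards a (possibly new) goal g to generalise the skill."""
        dt = 1.0 / n_steps
        y, yd, x = float(y0), 0.0, 1.0
        traj = []
        for _ in range(n_steps):
            ydd = self.alpha * (self.beta * (g - y) - yd) + (g - y0) * self._forcing(x)
            yd += ydd * dt
            y += yd * dt
            x += -self.alpha_x * x * dt   # canonical system: dx/dt = -alpha_x * x
            traj.append(y)
        return np.array(traj)
```

    The composite formulation described in the paper additionally couples Cartesian position DMPs with an orientation DMP so that both are encoded simultaneously; the sketch omits that coupling for brevity.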

    A review on manipulation skill acquisition through teleoperation-based learning from demonstration

    Manipulation skill learning and generalization have gained increasing attention due to the wide application of robot manipulators and the rapid growth of robot learning techniques. In particular, learning from demonstration has been exploited widely and successfully in the robotics community and is regarded as a promising direction for realizing manipulation skill learning and generalization. In addition to the learning techniques, immersive teleoperation enables a human to operate a remote robot through an intuitive interface and achieve telepresence. Thus, combining learning methods with teleoperation, and adapting the learned skills to different tasks in new situations, is a promising way to transfer manipulation skills from humans to robots. This review therefore aims to provide an overview of immersive teleoperation for skill learning and generalization to deal with complex manipulation tasks. To this end, the key technologies, e.g. manipulation skill learning, multimodal interfacing for teleoperation and telerobotic control, are introduced. Then, an overview is given of the most important applications of immersive teleoperation platforms for robot skill learning. Finally, this survey discusses the remaining open challenges and promising research topics.

    Adaptive compliant skill learning for contact-rich manipulation with human in the loop

    It is essential for a robot manipulator to adapt to unexpected events and dynamic environments while executing physical contact-rich tasks. Although a range of methods have been investigated to enhance the adaptability and generalization capability of robot manipulation, it is still difficult to perform complex contact-rich tasks, e.g., rolling pizza dough and robot-assisted medical scanning, without assistance from a human in the loop. We propose a novel framework combining learning from demonstration (LfD) and human experience to enhance the safety and adaptability of robot manipulation. In this framework, dynamic movement primitives (DMPs) are employed to learn manipulation skills from demonstrations, and human correction is applied to update the pre-trained DMP skill model. We conducted experiments on the Franka Emika Panda robot with pizza dough rolling tasks. The results demonstrate that the proposed framework can effectively improve the performance of physical contact-rich tasks, and that the human correction method through teleoperation provides a potential solution for advanced interaction tasks with complex and dynamic physical properties.
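
    The abstract states that human corrections delivered through teleoperation update the pre-trained DMP skill model. The snippet below is a purely hypothetical update rule of that kind: it blends the executed trajectory with the human-corrected one and refits any skill model exposing a fit() method over trajectories (such as the DMP sketch above). The blend ratio and refitting strategy are assumptions for illustration, not the paper's method.

```python
import numpy as np

def apply_human_correction(skill_model, y_executed, y_corrected, blend=0.5):
    """Hypothetical human-in-the-loop update: mix the executed trajectory with
    the teleoperated correction and refit the skill model on the blend.
    'blend' controls how strongly the correction overrides the pre-trained
    skill; this is an illustrative choice, not the paper's exact update law."""
    y_blend = (1.0 - blend) * np.asarray(y_executed, dtype=float) \
              + blend * np.asarray(y_corrected, dtype=float)
    skill_model.fit(y_blend)   # re-estimate the model, e.g. DMP forcing-term weights
    return skill_model
```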

    Design and Quantitative Assessment of Teleoperation-Based Human–Robot Collaboration Method for Robot-Assisted Sonography

    Tele-echography has emerged as a promising and effective solution, leveraging the expertise of sonographers and the autonomy of robots to perform ultrasound scanning for patients residing in remote areas, without the need for in-person visits by the sonographer. Designing effective and natural human-robot interfaces for tele-echography remains challenging, with patient safety being a critical concern. In this article, we develop a teleoperation system for robot-assisted sonography with two different interfaces, a haptic device-based interface and a low-cost 3D Mouse-based interface, which achieve continuous and intuitive telemanipulation from a leader device with a small workspace. To achieve compliant interaction with patients, we design impedance controllers in Cartesian space to track the desired position and orientation for these two teleoperation interfaces. We also propose comprehensive evaluation metrics for robot-assisted sonography, including subjective and objective measures, to assess tele-echography interfaces and control performance. We evaluate the ergonomic performance based on estimated muscle fatigue and the acquired ultrasound image quality, and conduct user studies based on the NASA Task Load Index to compare the two human-robot interfaces. The tracking performance and the quantitative comparison of the two teleoperation interfaces are evaluated on the Franka Emika Panda robot. The results and findings provide guidance on the design and implementation of human-robot collaboration for robot-assisted sonography. Note to Practitioners —Robot-assisted sonography has demonstrated efficacy in medical diagnosis during clinical trials. However, deploying fully autonomous robots for ultrasound scanning remains challenging due to various practical constraints, such as patient safety, dynamic tasks, and environmental uncertainties. Semi-autonomous or teleoperation-based robot sonography represents a promising approach for practical deployment. Previous work has produced various expensive teleoperation interfaces but lacks user studies to guide interface selection. In this article, we present two typical teleoperation interfaces and implement a continuous and intuitive teleoperation control system. We also propose a comprehensive evaluation metric for assessing their performance. Our findings show that the haptic device outperforms the 3D Mouse, based on operators' feedback and acquired image quality; however, the haptic device requires more learning time and effort in the training stage. Furthermore, the developed teleoperation system offers a solution for shared control and human-robot skill transfer. Our results provide valuable guidance for designing and implementing human-robot interfaces for robot-assisted sonography in practice.
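
    The article designs Cartesian-space impedance controllers so the robot tracks the teleoperated pose compliantly. The snippet below is a minimal sketch of the standard Cartesian impedance law (commanded wrench from pose and velocity errors); the stiffness and damping values, and the assumption that orientation error is already expressed as a rotation vector, are illustrative and not taken from the article.

```python
import numpy as np

def cartesian_impedance_wrench(x_err, xd_err,
                               K=np.diag([400.0, 400.0, 400.0, 30.0, 30.0, 30.0]),
                               D=np.diag([40.0, 40.0, 40.0, 6.0, 6.0, 6.0])):
    """Standard Cartesian impedance law: wrench = K * pose_error + D * velocity_error.
    x_err and xd_err are 6-vectors (position error stacked with an orientation
    error expressed as a rotation vector); K and D are placeholder gains."""
    return K @ np.asarray(x_err, dtype=float) + D @ np.asarray(xd_err, dtype=float)
```

    In practice the returned wrench would be mapped to joint torques through the manipulator Jacobian transpose, with the leader device supplying the desired pose.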

    A human‐robot collaboration method for uncertain surface scanning

    Robots are increasingly expected to replace humans in many repetitive and high-precision tasks, of which surface scanning is a typical example. However, it is usually difficult for a robot to independently deal with a surface scanning task with uncertainties in, for example, the irregular surface shape and surface properties. Moreover, it usually requires surface modelling with additional sensors, which can be time-consuming and costly. A human-robot collaboration-based approach is proposed that allows a human user and a robot to assist each other in scanning uncertain surfaces with uniform properties, such as scanning human skin in ultrasound examination. In this approach, teleoperation is used to obtain the operator's intent while allowing the operator to work remotely. After external force perception and friction estimation, the orientation of the robot end-effector is autonomously adjusted to remain as perpendicular to the surface as possible. Force control enables the robotic manipulator to maintain a constant contact force with the surface, and hybrid force/motion control ensures that force, position, and pose can be regulated without interfering with each other while reducing the operator's workload. The proposed method is validated using the Elite robot in a mock B-ultrasound scanning experiment.
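
    The approach relies on force perception and friction estimation to recover the surface normal, and on force control to hold a constant contact force. The sketch below shows one plausible way to do this under a simple Coulomb-friction assumption: remove the estimated friction component from the measured contact force, normalise the remainder as the surface normal, and regulate the force along that normal. The friction coefficient, gains and setpoint are placeholders, not the paper's values or exact estimator.

```python
import numpy as np

def estimate_surface_normal(f_measured, v_sliding, mu_est=0.3):
    """Split the measured contact force into a Coulomb friction component
    (opposing the sliding direction, magnitude mu * normal force) and a normal
    component, then return the normalised normal component as the estimated
    outward surface normal. The decomposition assumes pure sliding friction."""
    f = np.asarray(f_measured, dtype=float)
    v_dir = np.asarray(v_sliding, dtype=float)
    v_dir = v_dir / (np.linalg.norm(v_dir) + 1e-9)
    f_normal_mag = np.linalg.norm(f) / np.sqrt(1.0 + mu_est ** 2)
    f_friction = -mu_est * f_normal_mag * v_dir        # friction opposes sliding
    f_normal = f - f_friction
    return f_normal / (np.linalg.norm(f_normal) + 1e-9)

def contact_force_velocity(f_measured, n_hat, f_desired=5.0, kp=0.002):
    """Proportional force regulation along the estimated outward normal: when
    the measured normal force is below the setpoint, return a small velocity
    offset pressing the end-effector into the surface (along -n_hat)."""
    f_n = float(np.dot(f_measured, n_hat))
    return -kp * (f_desired - f_n) * np.asarray(n_hat, dtype=float)
```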

    Distributed observer-based prescribed performance control for multi-robot deformable object cooperative teleoperation

    In this paper, a distributed observer-based prescribed performance control method is proposed for using a multi-robot teleoperation system to manipulate a common deformable object. To achieve stable position tracking and realize the desired cooperative operational performance, we first define a new hybrid error matrix for both the relative distances and absolute positions of the robots and then decompose the matrix into two new error terms for cooperative and independent robot control. We then improve the Kelvin-Voigt (K-V) contact model based on the new error terms. Because the centre position and deformation of the object cannot be measured, the object dynamics are expressed through the relative distances of the robots and an equivalent impedance term. Each robot incorporates an observer to estimate the contact force and object dynamics based on its own measurements. To address the position errors caused by biases in force estimation and achieve position tracking for each robot, we improve the barrier Lyapunov functions (BLFs) by incorporating these errors into the system control, which allows us to achieve a predefined position-tracking performance. We conduct an experiment to verify the proposed controller's ability in a dual-telerobot cooperative manipulation task, even when the object is subjected to unknown disturbances. Note to Practitioners —This article is inspired by the limitations of multi-telerobot manipulation of a deformable object, where the deformation of the object cannot be measured directly and force sensors, especially 6-axis force sensors, are very expensive. To ensure that the object manipulated by multiple robots matches the state commanded on the leader side, we propose an object-centric teleoperation framework based on estimates of the contact forces and object dynamics and on the improved barrier Lyapunov functions (BLFs). This framework contributes two aspects in practice: 1) a control scheme for cooperative teleoperation of a deformable object by multiple robots when the object's centre position and deformation cannot be measured; 2) an improved BLF controller based on the estimation of contact force and robot dynamics. The estimation errors are considered and transferred through an equivalent impedance into the Lyapunov function to minimize both force- and motion-tracking errors. The experimental results verify the effectiveness of the proposed method, and the developed framework can be used in industrial applications with similar scenarios.
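
    The controller builds on the Kelvin-Voigt (K-V) contact model, which the paper then modifies using its hybrid error terms. For reference, the snippet below is the basic K-V (spring-damper) force law on which that modification rests; the stiffness and damping values are placeholders.

```python
import numpy as np

def kelvin_voigt_force(delta, delta_dot, k=800.0, c=15.0):
    """Basic Kelvin-Voigt contact model: contact force = stiffness * deformation
    + damping * deformation rate, clipped so that the contact can only push."""
    delta = np.maximum(np.asarray(delta, dtype=float), 0.0)   # no tensile deformation
    force = k * delta + c * np.asarray(delta_dot, dtype=float)
    return np.maximum(force, 0.0)                              # contact cannot pull
```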

    Impedance Learning for Human-Guided Robots in Contact With Unknown Environments

    Previous works have developed impedance control to increase safety and improve performance in contact tasks, where the robot is in physical interaction with either an environment or a human user. This article investigates impedance learning for a robot guided by a human user while interacting with an unknown environment. We develop automatic adaptation of the robot impedance parameters to reduce the effort required to guide the robot through the environment, while guaranteeing interaction stability. For nonrepetitive tasks, this novel adaptive controller can attenuate disturbances by learning appropriate robot impedance. Implemented as an iterative learning controller, it can compensate for position-dependent disturbances in repeated movements. Experiments demonstrate that, in both repetitive and nonrepetitive tasks, the robot controller can: first, identify and compensate for the interaction; second, ensure both contact stability (with reduced tracking error) and maneuverability (with less driving effort from the human user) in contact with real environments; and third, outperform previous velocity-based impedance adaptation control methods.
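
    The article adapts the robot's impedance parameters online and, for repeated movements, implements the adaptation as an iterative learning controller that compensates position-dependent disturbances. The snippet below sketches one simple update of that flavour: after each repetition, a stiffness profile and a feedforward force profile are adjusted where the tracking error was large. The update form, gains and clipping bound are assumptions for illustration, not the article's adaptation law.

```python
import numpy as np

def iterative_impedance_update(K_profile, ff_profile, error_profile,
                               gamma_k=0.3, gamma_f=0.1, K_max=1000.0):
    """Illustrative iterative-learning-style update over one repetition:
    raise stiffness where the absolute position error was large and adapt a
    feedforward force in the direction of the error, so the next trial resists
    the position-dependent disturbance with less human guiding effort."""
    e = np.asarray(error_profile, dtype=float)
    K_next = np.clip(np.asarray(K_profile, dtype=float) + gamma_k * np.abs(e), 0.0, K_max)
    ff_next = np.asarray(ff_profile, dtype=float) + gamma_f * e
    return K_next, ff_next
```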
